Is a naturalistic account of reason compatible with its objectivity?

How can rational objectivism be reconciled with my principles of naturalism?

Greg Detre

Monday, 5th November, 2001

Dr Tasioulas

 

 

Perhaps part of the problem with current conceptions of rationality is that they conflate a number of components, which together give rise to what we consider to be rationality. This is a specific application of a broad idea currently popular in cognitive science and philosophy of mind, which goes under many names, including modularity, the multiple drafts hypothesis (Dennett) and the society of mind. I will adopt Dennett's terminology of 'multiple drafts', used to refer to concurrent, restricted processes or modules, which interact, influence each other and compete for control of the system. At different times, different modules will dominate, allowing us to react flexibly in a variety of situations, and literally to 'contain multitudes'[1].

Taxonomies of rationality

There are an enormous number of different ways in which we could try to divide up rationality, some resulting in similar taxonomies.

Evolutionary age

The most evolutionarily ancient part of the brain is the hindbrain. Its broad design can be traced to our reptilian past. It links directly to the spinal cord, controlling basic functions like respiration and heartbeat. Above this sits the 'mammalian' midbrain. The highest level is the neocortex, a thin layered sheet (a few millimetres thick in humans) that is most highly developed in intelligent species like primates and dolphins. In conjunction with various sub-cortical areas, the cortex is responsible for all higher-level processing. It might be possible in the long-term future to try to separate the various roles played in reasoning by the different areas, though it seems likely that almost all of what we would really term rationality occurs in various parts of the cortex.

will probably bin this paragraph

Linguistic/non-linguistic

There is a definite appeal to the idea that some of our reasoning is non-linguistic in some fashion. This can be taken in several ways:

1.      In opposition to the Language of Thought Hypothesis, the processing of the brain cannot be described as language-like syntactic manipulation of symbols... Relates back to Smolensky's stronger connectionist thesis???

2.      Sometimes we seem to be performing a task which we only afterwards express in propositional/linguistic terms, e.g. thinking in an imagistic fashion. When faced with two similar but not identical pictures, we are usually able to point out the difference between them. This seems like a clearly non-linguistic piece of reasoning (i.e. 'accessing objectively valid truths'), since we certainly don't appear to be representing every single component, object and shape propositionally and making comparisons.

why is this important??? what bearing would it have on our view of rationality if one or the other did turn out to be definitely the case???

maybe bin this section

Domains of rationality

domain in which you're using it: ecological, e.g. theory of mind, mathematical, inferential (Tooby & Cosmides etc.)

induction vs deduction???

Is it possible that the scientific and the logical domains of reasoning actually reflect fundamentally different rational processes??? Not really, the more I think about it.

Levels of rationality, minimal/idealised rationality, degrees of objectivity

If we could show that the demands placed on our reasoning are somehow less for naturalistic reasoning than for a priori or philosophical reasoning, then we might be able to accept a naturalistic account of the limitations of philosophical reasoning, while avoiding any attempts by philosophers to show that such arguments are self-refuting.

How can we understand this idea of levels of rationality? Is it simply that different domains of thought/discussion place greater or lesser demands on our intellect? That our brains just get them right more often? Or that they are somehow fundamentally, of their nature, easier to grasp, and their conclusions easier to draw? Or, even less satisfyingly, is it simply that we have no choice but to take some things as given, since otherwise our mental scaffold can never get off the (wholly skeptical) ground?

 

Cherniak provides one approach to considering degrees of rationality. In particular, he is attacking an idealised, (more or less) all-or-nothing conception of rationality that he claims authors like Dennett, Davidson, Quine and Cohen??? support.

He accuses Davidson in 'Psychology as philosophy' of claiming that we need only a 'large degree of consistency' while actually arguing for ideal consistency, as does Quine's translation policy. Inconsistency can be very difficult to unmask if the logical relations are convoluted and the inconsistency implicit; also, we tend to compartmentalise our beliefs, only comparing beliefs within a subset.

He attacks Dennett's claim that 'as we uncover apparent irrationality under an Intentional interpretation of an entity, our grounds for ascribing any beliefs at all wanes'; Cherniak argues that this is not the case for above-minimally rational creatures. I'm not so sure: a just-above-minimally rational creature might seem to hold only a skeletal few beliefs. It's a matter of degree, really.

Cherniak is looking to explain why intentional explanations are so successful as a means of predicting and understanding others' behaviour. By intentional explanations, he refers to the attribution of a cognitive system of beliefs, desires, perceptions etc. He wants to show that too weak a conception of rationality is insufficient to explain the success of these intentional explanations, while too strong a conception also fails, for different reasons, as well as being wholly inapplicable to human beings in the real world.

His 'minimal general conditions for rationality' have to lie between what he characterises as the 'assent theory of belief' and the 'ideal conditions of rationality'. The assent theory of belief considers that:

An agent believes 'all and only those statements which he would affirm', i.e. that believing a proposition consists simply in having an accompanying 'feeling of assent'

Almost anything goes in such a caricatured theory, since it places no inherent consistency constraints, and no system by which inferences can be drawn from a given set of beliefs. As a result, it is quite unable to explain the predictive success of assuming intentionality in other people, since an agent is free to hold any beliefs he chooses; or at least, there is no systematic way of predicting, deducing or explaining which beliefs such an agent would have.

At the opposite end of the spectrum, Cherniak characterises the ideal general rationality criterion as:

An ideally-rational agent with a particular belief-desire set would:

make all of the sound inferences from his belief set

eliminate all inconsistencies that arise in his belief-set

undertake all actions which would, according to his beliefs, tend to satisfy his desires (termed �apparently appropriate actions�)

This leaves no room for 'sloppiness'. Sloppiness in Cherniak's sense is almost a technical term, encompassing all of the factors which undermine our deductive ability. These include: laziness or carelessness; the difficulty of the deduction to be made (i.e. whether it is convoluted, indirect, or requires numerous unrelated-seeming premises); cognitive limitations (e.g. short-term memory); time constraints; and, most fundamentally, the 'finitary predicament'. We have finite-sized brains and a finite time available to us, and so we are restricted in the number and range of inferences we can consider, let alone draw.

The reason that these idealisations are made is that they allow us to simplify human behaviour sufficiently to formalise it in disciplines which deal with an enormous mass of human interactions, like economics. However, we don't like them because…

He is particularly keen to attack the idea that an agent actually believes (or infers, or can infer) all consequences of his beliefs.

I don't think that any of the philosophers whom Cherniak accuses of idealising rationality would explicitly accept the premise couched in those terms. It is obvious that that would require infinite resources, since it would probably require analysing some belief-sentences that could not be stated, let alone understood, within the agent's lifetime.

Cherniak considers the Goldbach conjecture. We have a set of axioms and a conjectured inference, and yet we are unable to tell whether the inference follows deductively. Appeals to more prosaic cognitive limitations like short-term memory, carelessness or simply failing by accident to take relevant premises into account cannot explain our failure. In one sense, the problem is simply that the space of possible mathematical proofs is far too big for us to be able to search through it. This feels like a simple concession to Cherniak's statement of our finitary predicament. But I think it concedes far more than that, because the space in which we operate on a daily basis when acting rationally is almost always far larger than we can possibly search. And if this is the case, then we simply are not rational in the way that Nagel requires.
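The asymmetry can be made concrete with a small sketch (my own illustration, not Cherniak's): checking any particular instance of the Goldbach conjecture is mechanically trivial, but no amount of instance-checking amounts to searching the space of possible proofs.

```python
# Verifying instances of the Goldbach conjecture is easy and mechanical;
# it is the space of possible *proofs* that we cannot feasibly search.

def is_prime(n):
    """Trial-division primality test -- fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_witness(n):
    """Return a pair of primes summing to the even number n, if one exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Every even number up to 1000 has a witness...
assert all(goldbach_witness(n) for n in range(4, 1001, 2))
print(goldbach_witness(28))   # → (5, 23)
```

None of this brings us any closer to a proof: the check is bounded, the conjecture is not.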

The minimal rationality conditions he sets out are:

A minimally-rational agent with a particular belief-desire set would:

make some, but not necessarily all of the sound inferences from his belief set

eliminate some (but not necessarily all) inconsistencies that arise in his belief-set

attempt some, but not necessarily all, of those actions which would, according to his beliefs, tend to satisfy his desires

not attempt most (but not necessarily all) of the actions which are inappropriate given that belief-desire set (termed the corresponding 'negative rationality' requirement)
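One way to picture the gap between the ideal and minimal conditions is a toy reasoner; the names and the feasibility 'budget' below are my own illustrative assumptions, not Cherniak's apparatus.

```python
# A toy "minimally rational" agent: it closes its beliefs under modus
# ponens, but only within a feasibility budget -- it never computes the
# full deductive closure that the ideal rationality condition demands.

def minimally_close(beliefs, rules, budget):
    """beliefs: set of atoms; rules: iterable of (premise, conclusion)
    pairs read as 'premise -> conclusion'. Draw at most `budget` inferences."""
    beliefs = set(beliefs)
    drawn = 0
    changed = True
    while changed and drawn < budget:
        changed = False
        for premise, conclusion in rules:
            if premise in beliefs and conclusion not in beliefs:
                beliefs.add(conclusion)
                drawn += 1
                changed = True
                if drawn >= budget:
                    break
    return beliefs

rules = [("p", "q"), ("q", "r"), ("r", "s")]
# With a tight budget the agent stops short of the ideal closure:
print(sorted(minimally_close({"p"}, rules, budget=2)))   # → ['p', 'q', 'r']
print(sorted(minimally_close({"p"}, rules, budget=10)))  # → ['p', 'q', 'r', 's']
```

The budget stands in crudely for the whole theory of feasibility: which inferences get drawn depends on the resources available, not just on what is entailed.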

This is supplemented by his theory of feasible inferences: we know which reasoning tasks are more difficult for humans than others, i.e. we have a weighting of deductive tasks with respect to their feasibility for the reasoner, so that we can predict which inferences are easier and more likely to be drawn. He leaves it as an open question whether the most 'obvious' inferences (like modus ponens) could be performed by any creature that qualifies as having beliefs.

Theory of human memory structure: this helps you know which beliefs will be recalled when, e.g. whether the premises and rules are active at the time of considering a belief/conclusion. Thus, the activated belief subset is subject to a more stringent inference condition than the inactive belief set. Of course, I think it would be an even more powerful theory if it were couched in connectionist terms of association, rather than discrete subsets.

 

In determining whether a person ought to make a given inference in order to be pragmatically rational, you need to take into account: the soundness of the inference; its feasibility; and its apparent usefulness according to the person's beliefs and desires.

Cherniak skips over the difference between conscious and unconscious inferences, and explicitly makes the assumption that our entire belief-desire system can be expressed as a finite set of (logically-interpretable) sentences.

Others have taken a similar approach, notably Simon and Goldman.

Performance vs competence

Cohen distinguishes between performance and competence, which amounts to the distinction between how well you actually do something, and how well you are (potentially) capable of doing that task. In the same way that a superb sportsman may have an off-day because of lack of sleep, or nerves, our performance as reasoners (e.g. in various psychological tests) may be significantly inferior to our competence on a good day. This could be for a variety of quite prosaic reasons, such as the ones given above for the sportsman, as well as a few deeper and more specific ones.

give better examples of failures of performance

Cohen uses the example of English grammar. Although we regard most speakers of the English language as competent grammarians, insofar as they can distinguish grammatical and ungrammatical sentences very reliably, our performance varies a great deal: we often make grammatical errors during the slapdash flurry of a casual conversation, including grammatical errors that we could identify as errors. Cohen argues that the only way to define the grammar of a language is through the careful intuitions of what Rorty might term an ideal community of speakers of that language. The job of linguists is to systematically describe the sum of careful, intuitive grammatical judgements given by just such a group of intelligent, fluent (probably native) speakers. Where obvious schisms do appear between groups of speakers, then we can say that we have distinguished dialects within the language.

In the same way, 'the only way to tell that modus ponens and modus tollens are valid inference rules is that competent thinkers judge arguments of this form to be good ones. Note that this does not mean that competent thinkers will never be misled by the presentation of an argument and fail to recognize that modus tollens is an applicable inference rule.'

This leads convincingly towards a sort of reflective equilibrium view of human reasoning, where normative reasoning criteria are based on the intuitions of ordinary people. These intuitions have been systematically described in greater and greater detail, from Aristotle through to Frege, 'constructing a coherent system of rules and principles by which those same people can, if they so choose, reason much more extensively and accurately than they would otherwise'. Thus, 'human rationality, in the sense of the possession of a basic competence in judging inferences to be logically sound, follows from the fact that we can only know what the rules of logic are by comparing them to what people intuitively judge to be logically sound.'

The question remains as to whether this process leads to objective truths; after all, it seems to be relativising rationality to a human consensus…

Cohen�s approach allows us to see degrees of rationality in terms of degrees of objectivity(???)

He describes the process by which our rational intuitions are honed in a similar way to Nagel's description of how we move towards objectivity: we take a subjective view, and attempt to step back from it to 'form a new conception which has that view and its relation to the world as its object'.

How do we explain the experimental results?

Cohen explains the results of psychological research into human inferential failings as resulting from 'either the presentation of the problem, or from subjects' inability to properly encode the logical structure of the task being presented', both of which are failures of performance rather than competence.

The overall thrust of Cohen's conclusion is that the research on human inferential shortcomings should be construed as showing how subjects can be vulnerable to "cognitive illusions" when problems are presented in unfamiliar ways that interfere with their inferential performance, not as showing that human beings lack the logical competence to deal effectively with reasoning problems, in that they systematically rely on "heuristics" rather than on correct logical rules.(???)

Forward/backward

Finally, I want to consider trying to divide our reasoning process into what I will term 'forward' and 'backward' reasoning, although the divide might sometimes be better termed 'open/closed' or even 'sub-conscious/conscious'.

To illustrate the difference, consider the game of chess. When faced with an early- or mid-game board position, we can assume that neither a human nor a (foreseeable) computer player could consider all the legal remaining moves. However, with varying degrees of success (and subject to practice and understanding of the rules), a human player will have only a handful of the best moves presented to his consciousness. The brain has somehow sub-consciously searched forwards from a starting position through an enormous space, of which our awareness is limited to only the most optimal solutions eventually found.
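As a rough analogy (a deliberately trivial stand-in for chess, with a made-up game and scoring rule), forward reasoning resembles a search that explores a large tree but surfaces only the few best candidate moves:

```python
# "Forward" reasoning as bounded search: explore a game tree several
# plies deep, but surface only the k best first moves -- the rest of the
# explored space never reaches "consciousness". Toy game: states are
# numbers, a move adds 1, 2 or 3, and we want to land near a target.
import heapq

def candidate_moves(state):
    return [state + d for d in (1, 2, 3)]

def score(state, target=10):
    return -abs(target - state)   # higher is better (closer to target)

def surface_best(state, k=2, depth=3):
    """Report only the k most promising first moves after lookahead."""
    def lookahead(s, d):
        if d == 0:
            return score(s)
        return max(lookahead(n, d - 1) for n in candidate_moves(s))
    moves = candidate_moves(state)
    return heapq.nlargest(k, moves, key=lambda m: lookahead(m, depth - 1))

print(surface_best(0))   # → [3, 2]
```

The point of the analogy is only the shape of the process: a wide search happens, but only a pruned shortlist is ever presented for explicit consideration.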

In contrast, if in the post-game analysis your opponent points out his reasons for making or not making a given move (probably couched partly in terms of intentional mental states and partly in terms of the rules, mechanics and future moves of the game), then your comprehension and careful assessment of those reasons is working backwards.

The same distinction works in the case of a philosophical discussion. We might start by outlining the premises upon which everyone agrees. We are now faced with the task of deciding what valuable, substantive conclusions we can safely draw, i.e. reasoning forwards. The vast majority of conceivable inferences will be nonsensical, irrelevant, and/or indeterminable from the premises, but we will hardly even register any of these. Our minds will then be drawn towards a small number of (hopefully) articulable, relevant, contestably sound inferences.

Of these, we will consider, debate, discard and argue for some on the basis of reasons (reasoning backwards), assessing the validity of the arguments. Of course, we might have started with this phase first if someone was reading a paper, or expounding a view, or in another situation where we weren't having to work forwards beating an argument-path for ourselves.

The process continues with a continual revision of the original premises, adding and removing them, each time intuitively seeing how this affects the possible inferences.

Perhaps one way of putting this is to say that forwards reasoning is the process by which we effectively produce the reasons for one course/choice rather than another, although we are not really literally producing the reasons so much as noting the particular courses/choices for which good reasons exist. The Cartesian method of doubt is perhaps the paradigmatic example of this. Backwards reasoning is the process by which we then carefully elaborate and weigh up the respective reasons behind the available choices, and make our evaluation. The two types of reasoning necessarily complement each other, and we constantly employ them together.

It is important to note that forward reasoning need not be the instantaneous revelation that this outline depicts. Sometimes inferences are immediately visible; at other times it takes considerable time and concentration to see new possibilities (though this effort of concentration does not lift the shroud of mystery under which our subconscious processes operate). This is probably most easily explained at the neural level: neural networks often take many 'time-steps' to settle into an attractor, i.e. a solution which satisfies as many as possible of the constraints within which the network has been trained to operate. An analogy is often drawn with a system like a roulette wheel, where the ball takes a considerable time to settle into an equilibrium, i.e. its resting place.
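The settling metaphor can be illustrated with a minimal Hopfield-style network (a standard textbook construction, not a model of actual reasoning): a corrupted input relaxes under repeated updates into the stored pattern, i.e. the attractor.

```python
# A one-pattern Hopfield-style network: Hebbian weights store a pattern,
# and a corrupted starting state relaxes into it under repeated updates.
import numpy as np

pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)            # no self-connections

state = pattern.copy()
state[:3] *= -1                   # corrupt three of the eight units
for _ in range(5):                # synchronous updates until settled
    state = np.where(W @ state >= 0, 1, -1)

print(np.array_equal(state, pattern))   # → True
```

With more stored patterns and noisier inputs the settling takes longer and can fail, which is at least suggestive of the effortful, fallible character of forward reasoning.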

 

However, both types of reasoning fail at times. In either case we could be considered irrational, but in different ways. In the case of forwards reasoning, a mistake manifests itself as falsely believing an inference to be consistent with or entailed by the premises, or as failing to draw all of the interesting, sound inferences from the belief set. In this way, backward reasoning acts as a more methodical check on the results of our forward reasoning. We can all think of times when we have initially failed to see why a mathematical proof or argument follows for given reasons, only for it suddenly to become obvious.

Could they be experimentally separated?

It might be interesting at some point in the medium-term future to consider some sort of neuro-imaging study of whether different parts of the brain are active, or a different control process is operating, when undergoing purely forward or backward reasoning tasks. The two considerable difficulties with such an approach are that:

1.      We have really very little idea how any sort of high-level cognitive functioning can be understood on a neural level. Disambiguating 'reasoning' from all of the processes going on in the brain at once just may not prove possible, or may not even be a well-posed or meaningful task. It may be that reasoning simply emerges out of the pandemonium of a number of domain-specific systems, in which case looking for the neural correlate of 'pure' reasoning would be as nonsensical as looking for the neural correlate of pattern-matching or the Cartesian theatre: they simply do not exist as separate, definable entities in modern cognitive psychology.

2.      Designing an elegant, reproducible experiment with clear results looks to be a very difficult task. It requires us to be able to conceive of tasks that require the subject to engage in only backward or forward reasoning.

I think it's difficult, and probably a mistake, to try to be more specific at this stage about what exactly is going on when we reason forwards. I imagine it as some huge attractor network spanning various central cognitive systems in the brain, incorporating a huge number of cognitive representations of the world at various levels of abstraction, as well as linguistic and sensory information to a greater or lesser extent.

 

could it be that the abstract space being formed is actually much smaller than the total space of grammatical propositions, and that that's why the search gets quicker: this could explain what forward reasoning really is, and how paradigm shifts, insights and new approaches fit in

 

'arrive at principles that are universal and exceptionless: to be able to come up with reasons that apply in all relevantly similar situations, and to have reasons of similar generality that tell us when situations are relevantly similar'

 

Where does this all leave us?

Nozick

Nozick�s account is attractive in a number of ways. It can be accommodated with minimal metaphysical commitments,

Its price is that it does not really face Nagel head on: Nozick is content to admit that he is not explaining rationality 'from first principles'(???); he is presupposing a degree of rationality in order to consider oneself rationally. And, as I will discuss later, this is the only position that I think we can take as philosophers. On the one hand, we face an empty, skeptical suspension of belief, since we recognise that in order to hold any justified beliefs whatsoever, we first require a justified belief about our ability to form such beliefs. And yet, in suspending our belief, we have already recognised that this is the only rational option. In this way, Nagel's characterisation of 'thoughts that we cannot get outside of' is particularly appropriate.

In a way, it's obvious that we could never monitor our entire brain: with what would we be doing the monitoring? Where can we stand such that we can view our position from any position but our own? Can we turn our eyes back upon our own skull (in a more meaningful sense than just the eyeball-rolling party trick)?

So we have little choice but to accept that simply being able to frame the question of one's own rationality is a sort of base condition for rationality. Doubting is, of necessity, a kind of rational thinking. Descartes' cogito may thus serve to bootstrap us into knowledge of our own rationality (or as evidence that we have already bootstrapped ourselves into it).

Perhaps it's not so much the doubting or questioning of one's own rationality as simply being able to conceive of rationality at all. Perhaps the complex notion of rationality is its own key. Being able to conceive abstractly of context-independent, formal, generalisable methods and propositions (or perhaps the notions of context-independence, formality, generalisability, method and proposition collectively) forms the tip of a cognitive-framework iceberg: a syntax-manipulating, representation-of-representation mind, even a fallible, specialised, evolved one.

 

Incompatibility

My stated intention at the start of this paper was to investigate how easily a naturalistic framework and rational objectivism can accommodate each other. I was hoping and expecting to find that they were incompatible in certain fundamental, ineradicable ways. Given that I feel that we are far better scientists than philosophers, this would have further persuaded me that the reason we disagree on almost all non-empirical issues is that we are not sufficiently rational or powerful thinkers to make real headway in such areas. This would not necessarily be to dismiss out of hand the entire philosophical enterprise, but it would undermine it in those areas where there is no support from other disciplines to provide arbitration in disputes.

As it has turned out, I have found even a restrictive, contemporary naturalistic account to be surprisingly pliable with respect to our rational capacities.

 

Conclusions

our performance is usually pretty low for both backward and forward reasoning

however, our competence at backward reasoning is pretty high, especially for philosophers, given sufficient time

it is our competence at forward reasoning that is low

even given a lot of time, we may fail to draw the most important sound inferences from our belief set, and fail to see some of the subtle inconsistencies

this is much exacerbated by the difficulty we have in making our beliefs explicit

this is an area that Cherniak fails to address: he just assumes that all of our beliefs can be expressed propositionally. I don't think this places too high a demand. the problem is that we are usually unable to introspect exactly what our beliefs are, or we ignore beliefs, or fail to unmask assumptions

this is alluded to by his theory of feasible inferences, which he doesn't elaborate enough: some people are better at expressing areas of their belief set than others

 

in a sense, I'm affirming that there are rational truths in the world, like modus ponens, while trying to explain how we can be capable (competent) of rational access and yet sometimes disagree and fail to apprehend such truths

 

Questions

backward/forward (s) ???

does Cherniak elaborate on why/how individuals' feasibility of inferences differ???

 



[1] Walt Whitman, 'Song Of Myself':

Do I contradict myself?

Very well then I contradict myself,

(I am large, I contain multitudes.)